[mlir][sparse] Include sparse emit strategy in wrapping iterator #165611
Conversation
@llvm/pr-subscribers-mlir

Author: Jordan Rupprecht (rupprecht)

Changes

When we create a `SparseIterator`, we sometimes wrap it in a `FilterIterator`, which delegates _some_ calls to the underlying `SparseIterator`.

After construction, e.g. in `makeNonEmptySubSectIterator()`, we call `setSparseEmitStrategy()`. This sets the strategy only in one of the filters: if we call `setSparseEmitStrategy()` immediately after creating the `SparseIterator`, then the wrapped `SparseIterator` has the right strategy and the `FilterIterator`'s strategy stays uninitialized; if we call `setSparseEmitStrategy()` after wrapping the iterator in a `FilterIterator`, the opposite happens.

If we make `setSparseEmitStrategy()` a virtual method so that it participates in the `FilterIterator` delegation pattern, and do all reads of `emitStrategy` through a virtual method as well, it is straightforward to ensure that the strategy is set and read consistently.

Without this, the UB of the strategy being uninitialized manifests as a sporadic test failure in mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_strided_conv_2d_nhwc_hwcf.mlir when run downstream with the right flags (e.g. asan + assertions off). The test sometimes fails with `ne_sub<trivial<dense[0,1]>>.begin' op created with unregistered dialect`. It can also be observed directly with msan that this uninitialized read is the cause of that issue, but msan causes other problems with this test.

Full diff: https://github.com/llvm/llvm-project/pull/165611.diff

2 Files Affected:
- mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
- mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
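To make the construction-order problem concrete, here is a minimal, self-contained C++ sketch of the pre-patch situation. The names (`Strategy`, `Iterator`, `FilterIterator`) are hypothetical stand-ins, not the actual MLIR classes; the sketch only models the shape of the bug: a non-virtual setter that updates one object's copy of the strategy while the other copy stays uninitialized.

```cpp
#include <cstdio>
#include <memory>

enum class Strategy { kFunctional, kDebugInterface };

struct Iterator {
  virtual ~Iterator() = default;
  // Non-virtual: only updates the member of the object it is called on.
  void setStrategy(Strategy s) { strategy = s; }
  Strategy strategy; // deliberately not initialized, mirroring the bug
};

// A wrapper that owns another iterator and forwards only *some* calls.
struct FilterIterator : Iterator {
  explicit FilterIterator(std::unique_ptr<Iterator> wrap)
      : wrap(std::move(wrap)) {}
  std::unique_ptr<Iterator> wrap;
};

int main() {
  auto inner = std::make_unique<Iterator>();
  // Setting the strategy before wrapping initializes only the inner copy...
  inner->setStrategy(Strategy::kDebugInterface);
  FilterIterator outer(std::move(inner));
  // ...so outer.strategy is still indeterminate. Any branch on it (like the
  // kDebugInterface checks in genInit(), genNotEnd(), etc.) reads an
  // uninitialized value. Setting the strategy only after wrapping flips the
  // problem: then the inner copy is the uninitialized one.
  std::printf("outer has its own, never-initialized strategy member\n");
  return 0;
}
```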
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
index 46d0baac58f06..61b5ad600a16e 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.cpp
@@ -504,6 +504,14 @@ class SimpleWrapIterator : public SparseIterator {
unsigned extraCursorVal = 0)
: SparseIterator(kind, *wrap, extraCursorVal), wrap(std::move(wrap)) {}
+ void setSparseEmitStrategy(SparseEmitStrategy strategy) override {
+ wrap->setSparseEmitStrategy(strategy);
+ }
+
+ SparseEmitStrategy getSparseEmitStrategy() const override {
+ return wrap->getSparseEmitStrategy();
+ }
+
SmallVector<Type> getCursorValTypes(OpBuilder &b) const override {
return wrap->getCursorValTypes(b);
}
@@ -979,7 +987,7 @@ class SubSectIterator : public SparseIterator {
void SparseIterator::genInit(OpBuilder &b, Location l,
const SparseIterator *p) {
- if (emitStrategy == SparseEmitStrategy::kDebugInterface) {
+ if (getSparseEmitStrategy() == SparseEmitStrategy::kDebugInterface) {
std::string prefix = getDebugInterfacePrefix();
Operation *begin = b.create(l, b.getStringAttr(prefix + ".begin"), {},
getCursorValTypes(b));
@@ -994,7 +1002,7 @@ void SparseIterator::genInit(OpBuilder &b, Location l,
}
Value SparseIterator::genNotEnd(OpBuilder &b, Location l) {
- if (emitStrategy == SparseEmitStrategy::kDebugInterface) {
+ if (getSparseEmitStrategy() == SparseEmitStrategy::kDebugInterface) {
std::string prefix = getDebugInterfacePrefix();
Operation *notEnd = b.create(l, b.getStringAttr(prefix + ".not_end"),
getCursor(), b.getI1Type());
@@ -1005,7 +1013,7 @@ Value SparseIterator::genNotEnd(OpBuilder &b, Location l) {
}
void SparseIterator::locate(OpBuilder &b, Location l, Value crd) {
- if (emitStrategy == SparseEmitStrategy::kDebugInterface) {
+ if (getSparseEmitStrategy() == SparseEmitStrategy::kDebugInterface) {
std::string prefix = getDebugInterfacePrefix();
SmallVector<Value> args = getCursor();
args.push_back(crd);
@@ -1019,7 +1027,7 @@ void SparseIterator::locate(OpBuilder &b, Location l, Value crd) {
}
Value SparseIterator::deref(OpBuilder &b, Location l) {
- if (emitStrategy == SparseEmitStrategy::kDebugInterface) {
+ if (getSparseEmitStrategy() == SparseEmitStrategy::kDebugInterface) {
std::string prefix = getDebugInterfacePrefix();
SmallVector<Value> args = getCursor();
Operation *deref = b.create(l, b.getStringAttr(prefix + ".deref"),
@@ -1032,7 +1040,7 @@ Value SparseIterator::deref(OpBuilder &b, Location l) {
ValueRange SparseIterator::forward(OpBuilder &b, Location l) {
assert(!randomAccessible());
- if (emitStrategy == SparseEmitStrategy::kDebugInterface) {
+ if (getSparseEmitStrategy() == SparseEmitStrategy::kDebugInterface) {
std::string prefix = getDebugInterfacePrefix();
Operation *next = b.create(l, b.getStringAttr(prefix + ".next"),
getCursor(), getCursorValTypes(b));
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
index 642cb1afa156b..3636f3f01adb5 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/SparseTensorIterator.h
@@ -177,10 +177,14 @@ class SparseIterator {
public:
virtual ~SparseIterator() = default;
- void setSparseEmitStrategy(SparseEmitStrategy strategy) {
+ virtual void setSparseEmitStrategy(SparseEmitStrategy strategy) {
emitStrategy = strategy;
}
+ virtual SparseEmitStrategy getSparseEmitStrategy() const {
+ return emitStrategy;
+ }
+
virtual std::string getDebugInterfacePrefix() const = 0;
virtual SmallVector<Type> getCursorValTypes(OpBuilder &b) const = 0;
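For contrast, here is a minimal sketch, again with hypothetical names rather than the real MLIR classes, of the pattern the diff above adopts: the setter and a new getter are virtual, and the wrapping iterator delegates both to the iterator it wraps, so every read resolves to a single authoritative value no matter when the strategy was set.

```cpp
#include <cassert>
#include <memory>

enum class Strategy { kFunctional, kDebugInterface };

struct Iterator {
  virtual ~Iterator() = default;
  virtual void setStrategy(Strategy s) { strategy = s; }
  virtual Strategy getStrategy() const { return strategy; }

protected:
  // Default initializer is a sketch convenience so the getter is always
  // safe to call on non-wrapping iterators.
  Strategy strategy = Strategy::kFunctional;
};

// Mirrors SimpleWrapIterator in the patch: both accessors delegate to `wrap`.
struct FilterIterator : Iterator {
  explicit FilterIterator(std::unique_ptr<Iterator> wrap)
      : wrap(std::move(wrap)) {}
  void setStrategy(Strategy s) override { wrap->setStrategy(s); }
  Strategy getStrategy() const override { return wrap->getStrategy(); }

private:
  std::unique_ptr<Iterator> wrap;
};

int main() {
  FilterIterator outer(std::make_unique<Iterator>());
  // Whether the strategy is set before or after wrapping no longer matters:
  // reads go through the virtual getter, which lands on the wrapped
  // iterator's single copy of the value.
  outer.setStrategy(Strategy::kDebugInterface);
  assert(outer.getStrategy() == Strategy::kDebugInterface);
  return 0;
}
```

This is also why the patch rewrites the direct reads of `emitStrategy` in `genInit()`, `genNotEnd()`, `locate()`, `deref()`, and `forward()` as calls to `getSparseEmitStrategy()`: with reads going through the virtual getter, the wrapped iterator's strategy is the single source of truth for wrapping iterators, while plain iterators keep using the base-class member.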
If this fixes UB, then it is arguably no longer an NFC, right? [I am not 100% sure about our terminology.]
Thanks!
I've heard a wide range of opinions on what's considered NFC. This patch feels NFC to me because, outside of the UB issue, nothing should be changing. But I can totally see how removing UB is not NFC. I'll err on the side of not claiming this is NFC.
…m#165611) When we create a `SparseIterator`, we sometimes wrap it in a `FilterIterator`, which delegates _some_ calls to the underlying `SparseIterator`. After construction, e.g. in `makeNonEmptySubSectIterator()`, we call `setSparseEmitStrategy()`. This sets the strategy only in one of the filters -- if we call `setSparseEmitStrategy()` immediately after creating the `SparseIterator`, then the wrapped `SparseIterator` will have the right strategy, and the `FilterIterator` strategy will be uninitialized; if we call `setSparseEmitStrategy()` after wrapping the iterator in `FilterIterator`, then the opposite happens. If we make `setSparseEmitStrategy()` a virtual method so that it's included in the `FilterIterator` pattern, and then do all reads of `emitStrategy` via a virtual method as well, it's pretty simple to ensure that the value of `strategy` is being set consistently and correctly. Without this, the UB of strategy being uninitialized manifests as a sporadic test failure in mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_strided_conv_2d_nhwc_hwcf.mlir, when run downstream with the right flags (e.g. asan + assertions off). The test sometimes fails with `ne_sub<trivial<dense[0,1]>>.begin' op created with unregistered dialect`. It can also be directly observed w/ msan that this uninitialized read is the cause of that issue, but msan causes other problems w/ this test.